Sketches for Matrix Norms: Faster, Smaller and More General

Authors

  • Vladimir Braverman
  • Stephen R. Chestnut
  • Robert Krauthgamer
  • Lin F. Yang
Abstract

We design new sketching algorithms for unitarily invariant matrix norms, including the Schatten p-norms ‖·‖_{S_p}, and obtain, as a by-product, streaming algorithms that approximate the norm of a matrix A presented as a turnstile data stream. The primary advantage of our streaming algorithms is that they are simpler and faster than previous algorithms, while requiring the same or less storage. Our three main results are a faster sketch for estimating ‖A‖_{S_p}, a smaller-space O(1)-pass sketch for ‖A‖_{S_p}, and a more general sketching technique that yields sublinear-space approximations for a wide class of matrix norms. These improvements are powered by dimensionality-reduction techniques that are modern incarnations of the Johnson-Lindenstrauss Lemma [JL84]. When p ≥ 2 is even or A is PSD, our fast one-pass algorithm approximates ‖A‖_{S_p} in optimal space, n, with O(1) update time and o(n) time to extract the approximation from the sketch, while the ⌈p/2⌉-pass algorithm is built on a smaller sketch of size n with O(1) update time and n query time. Finally, for a PSD matrix A and a unitarily invariant norm l(·), we prove that one can obtain an approximation to l(A) from a sketch GAH, where G and H are independent Oblivious Subspace Embeddings and the dimension of the sketch is polynomial in the intrinsic dimension of A. The intrinsic dimension of a matrix is a robust version of the rank, equal to the ratio ∑_i σ_i / σ_1; it is small, e.g., for machine-learning models that consist of a low-rank matrix plus noise. Naturally, this leads to much smaller sketches for many norms.

Acknowledgments: This material is based upon work supported in part by the National Science Foundation under Grant No. 1447639, by a Google Faculty Award, and by DARPA grant N660001-1-2-4014; its contents are solely the responsibility of the authors and do not represent the official view of DARPA or the Department of Defense. Work supported in part by the Israel Science Foundation grant #897/13.
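To make the sketching primitives in the abstract concrete, the toy snippet below (an illustrative sketch, not the paper's algorithm) uses plain Gaussian matrices G and H as a simple stand-in for oblivious subspace embeddings, builds a PSD low-rank-plus-noise matrix, reports its intrinsic dimension ∑_i σ_i/σ_1, and compares the Schatten 4-norm of A with a naive plug-in value read off the bilinear sketch GAH. All dimensions and the plug-in estimator are assumptions made for illustration; the paper's estimators and guarantees are more refined.

```python
# Toy illustration only: Gaussian sketches as a stand-in for oblivious
# subspace embeddings; a naive plug-in estimate, NOT the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
n, r, k, p = 500, 5, 100, 4          # ambient dim, rank, sketch dim, Schatten p

# PSD matrix = low-rank signal + small noise (low intrinsic dimension).
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
A = U @ np.diag(np.linspace(10.0, 20.0, r)) @ U.T
noise = rng.standard_normal((n, n)) * 1e-3
A += noise @ noise.T / n             # keep A symmetric PSD

sigma = np.linalg.svd(A, compute_uv=False)
print("intrinsic dimension  :", sigma.sum() / sigma[0])

def schatten(M, p):
    # Schatten p-norm: l_p norm of the singular values.
    return (np.linalg.svd(M, compute_uv=False) ** p).sum() ** (1.0 / p)

# Bilinear sketch GAH with independent Gaussian G (k x n) and H (n x k).
G = rng.standard_normal((k, n)) / np.sqrt(k)
H = rng.standard_normal((n, k)) / np.sqrt(k)
sketch = G @ A @ H                   # k x k, much smaller than n x n

print("true  ||A||_S4       :", schatten(A, p))
print("plug-in from sketch  :", schatten(sketch, p))
```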


Similar articles

Sublinear Time Orthogonal Tensor Decomposition

A recent work (Wang et al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. In...


Nearly-optimal bounds for sparse recovery in generic norms, with applications to k-median sketching

We initiate the study of trade-offs between sparsity and the number of measurements in sparse recovery schemes for generic norms. Specifically, for a norm ‖·‖, sparsity parameter k, approximation factor K > 0, and probability of failure P > 0, we ask: what is the minimal value of m so that there is a distribution over m × n matrices A with the property that for any x, given Ax, we can recover ...


On Sketching Matrix Norms and the Top Singular Vector

Sketching is a prominent algorithmic tool for processing large data. In this paper, we study the problem of sketching matrix norms. We consider two sketching models. The first is bilinear sketching, in which there is a distribution over pairs of r × n matrices S and n × s matrices T such that for any fixed n × n matrix A, from S · A · T one can approximate ‖A‖_p up to an approxima...
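As a concrete, hypothetical instance of the bilinear sketching model described above, the snippet below takes S and T with i.i.d. Gaussian entries and p = 2 (the Frobenius norm), for which E[‖SAT‖_F²] = ‖A‖_F² under the 1/√r and 1/√s scaling used here; it is only meant to make the model tangible, not to reproduce that paper's constructions or bounds.

```python
# Hypothetical instance of the bilinear sketching model: Gaussian S (r x n)
# and T (n x s); for p = 2 (Frobenius), E[||S A T||_F^2] = ||A||_F^2.
import numpy as np

rng = np.random.default_rng(1)
n, r, s = 400, 50, 50

A = rng.standard_normal((n, n))

def one_estimate():
    S = rng.standard_normal((r, n)) / np.sqrt(r)
    T = rng.standard_normal((n, s)) / np.sqrt(s)
    return np.linalg.norm(S @ A @ T, "fro") ** 2

trials = [one_estimate() for _ in range(200)]
print("||A||_F^2            :", np.linalg.norm(A, "fro") ** 2)
print("mean sketch estimate :", np.mean(trials))
```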


A Projected Alternating Least square Approach for Computation of Nonnegative Matrix Factorization

Nonnegative matrix factorization (NMF) is a common method in data mining that has been used in different applications as a dimension-reduction, classification, or clustering method. Methods in the alternating least squares (ALS) approach are usually used to solve this non-convex minimization problem. At each step of an ALS algorithm, two convex least-squares problems must be solved, which causes high com...
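For reference, a minimal projected-ALS sketch for NMF is shown below, assuming the generic textbook variant (not necessarily the exact algorithm proposed in that paper): each iteration solves the two convex least-squares problems in closed form and projects negative entries to zero.

```python
# Minimal projected-ALS sketch for NMF (generic variant, not that paper's
# exact algorithm): alternately solve two least-squares problems, then
# clip negatives to keep the factors nonnegative.
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 60, 40, 5

# Synthetic nonnegative data with an exact rank-k factorization.
X = np.abs(rng.standard_normal((m, k))) @ np.abs(rng.standard_normal((k, n)))

W = np.abs(rng.standard_normal((m, k)))
H = np.abs(rng.standard_normal((k, n)))

for _ in range(100):
    # Least-squares update for H with W fixed, then project onto >= 0.
    H = np.maximum(np.linalg.lstsq(W, X, rcond=None)[0], 0.0)
    # Least-squares update for W with H fixed (solve the transposed system).
    W = np.maximum(np.linalg.lstsq(H.T, X.T, rcond=None)[0].T, 0.0)

print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```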


Preconditioned Generalized Minimal Residual Method for Solving Fractional Advection-Diffusion Equation

Fractional differential equations (FDEs) have attracted much attention and have been widely used in the fields of finance, physics, image processing, and biology, etc. It is not always possible to find an analytical solution for such equations. The approximate solution or numerical scheme may be a good approach; particularly, the schemes in numerical linear algebra for solving ...



Journal:
  • CoRR

Volume: abs/1609.05885    Issue: -

Pages: -

Publication year: 2016